Annotated POPL 2007 Program

POPL 2007 is just around the corner, and the Little Calculist has taken time off from inventing Web 4.0 to track down all the papers available online and collect them in an annotated program. Favourite titles: Proving That Programs Eventually Do Something Good, and Lazy Multivariate Higher-Order Forward-Mode AD. I have no idea what the latter means, but I love the pile-up of jargon.

Ninety-nine Lisp Problems

As Lemonodor says: 99 Problems But malloc Ain't One of Them.

Anyway, the list is based on a list of Prolog challenges, but the solutions (when they exist) are in Lisp.

Berkeley Webcast Courses

UC Berkeley provides a webcast of the lectures for a number of their introductory courses. Being immersed in SICP, I decided it might be a good idea to listen to the lectures for CS 61A from Brian Harvey, which uses SICP as the text. I don't know if going back to fundamentals will interest others here on LtU, but this is a good resource for an introductory CS course. Of course, being a series of some 40+ lectures spread over a semester, it has both the advantages and disadvantages of learning the material in an academic setting (40+ hours is too long for casual learners).

A couple of tidbits. Although the course uses SICP as the text, it's used more as background material. The examples have a certain Logo accent, with a preference for word-and-sentence problems rather than math problems - at least at the start of the course. It also jumps into subjects like client-server and object-oriented programming that stretch beyond the text. Two of the lectures are occupied by videos from Alan Kay and Dan Ingalls. And the subjects of the lectures don't necessarily follow SICP in sequential order (the Scheme-1 interpreter is interleaved with the trees material of chapter 2). That said, I liked seeing SICP presented from a different angle.

From a PL perspective, the most interesting piece is likely the set of lectures on Logo. One of the projects revolves around modifying the metacircular evaluator to interpret Logo. Drawing on Brian Harvey's knowledge of both Scheme (Simply Scheme) and Logo (Computer Science Logo Style), a lecture and a half (somewhere around the metacircular-evaluator material) goes into the parallels between Scheme and Logo, along with a discussion of the PL design decisions that went into Logo (dynamic scoping, namespace separation, ...).
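
For readers who have never run into it, the practical difference behind that dynamic-scoping decision is easy to sketch. The fragment below is a toy illustration in Python (the names and the frame-stack representation are mine, not the course's): a free variable is resolved against the caller's bindings rather than the bindings in effect where the procedure was written, which is the essence of Logo-style dynamic scope.

    # Toy illustration (hypothetical names): resolving a free variable under
    # dynamic scope. A Logo-style procedure sees the variables of whoever
    # called it, not of where it was defined.

    # A "frame" is just a dict; the call stack is a list of frames.
    call_stack = [{"x": "global"}]

    def dynamic_lookup(name):
        # Search the caller chain, newest frame first (dynamic scope).
        for frame in reversed(call_stack):
            if name in frame:
                return frame[name]
        raise NameError(name)

    def show_x():
        # A "procedure" with a free reference to x.
        return dynamic_lookup("x")

    def caller():
        # The caller binds x locally; under dynamic scope, show_x sees it.
        call_stack.append({"x": "from caller"})
        try:
            return show_x()
        finally:
            call_stack.pop()

    print(show_x())   # "global"        - no intervening binder
    print(caller())   # "from caller"   - the caller's binding wins

    # Under Scheme's lexical scoping, show_x would always see the binding
    # visible where it was written ("global"), no matter who calls it.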

Preliminary Fortress Interpreter

We'd like to announce availability of a preliminary, open source, interpreter implementing a small core of the Fortress programming language. This interpreter runs on the JVM. You can download the source code at: http://fortress.sunsource.net/.

Our intention is to grow this implementation over time, with the help of university partners and other interested third parties. We expect that many parts of this interpreter will be used as components of a complete Fortress compiler, which is our long term goal.

Fortress has been discussed here several times.

(Posted by Sukyoung Ryu to the TYPES forum.)

Interlanguage Migration: From Scripts to Programs

Tobin-Hochstadt & Felleisen, "Interlanguage Migration: From Scripts to Programs" (Dynamic Languages Symposium 2006)

As scripts grow into full-fledged applications, programmers should want to port portions of their programs from scripting languages to languages with sound and rich type systems. This form of interlanguage migration ensures type-safety and provides minimal guarantees for reuse in other applications, too.

In this paper, we present a framework for expressing this form of interlanguage migration. Given a program that consists of modules in the untyped lambda calculus, we prove that rewriting one of them in a simply typed lambda calculus produces an equivalent program and adds the expected amount of type safety, i.e., code in typed modules can't go wrong. To ensure these guarantees, the migration process infers constraints from the statically typed module and imposes them on the dynamically typed modules in the form of behavioral contracts.
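
The boundary mechanism described in the last sentence -- constraints inferred from the typed module and imposed on the untyped modules as behavioral contracts -- can be sketched informally. The snippet below is a rough Python analogue of such a contract (the names and the checking discipline are mine, not the paper's): a value imported from an untyped module is wrapped so that violations are caught at the boundary and blamed on the offending party.

    # Rough sketch of a higher-order behavioral contract at a typed/untyped
    # module boundary. All names here are illustrative, not from the paper.

    class ContractViolation(Exception):
        pass

    def int_to_int(f, blame):
        """Wrap an untyped function so it must behave like Int -> Int."""
        def wrapped(x):
            if not isinstance(x, int):
                # Bad argument: the caller (the typed side) is at fault.
                raise ContractViolation(f"{blame}: caller supplied non-int {x!r}")
            result = f(x)
            if not isinstance(result, int):
                # Bad result: the untyped module is at fault.
                raise ContractViolation(f"{blame}: untyped code returned {result!r}")
            return result
        return wrapped

    # "Untyped module": works for some inputs, misbehaves for others.
    def script_increment(x):
        return x + 1 if x < 10 else "lots"

    # The typed module imports it only through the contract.
    increment = int_to_int(script_increment, blame="script_increment")

    print(increment(3))        # 4 - contract satisfied
    try:
        print(increment(99))
    except ContractViolation as err:
        print(err)             # blames the untyped script_increment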

See also: Typed Scheme by Sam Tobin-Hochstadt:

...a version of PLT Scheme's core with an "occurrence type system".
...Next we will develop a cross-language refactoring tool that assists programmers with the task of translating one module in a program system from the untyped language into the typed one. [The paper above] lays out the theoretical framework for this research.

(from Felleisen's Projects page)

Extending the Multilisp Sponsor Model

Extending the Multilisp Sponsor Model by Randy B. Osborne. 1993.
Speculative computing is a technique to improve the execution time of certain applications by starting some computations before it is known that the computations are required. A speculative computation will eventually become mandatory (i.e. required) or irrelevant (i.e. not required). In the absence of side effects irrelevant computations may be aborted. However, with side effects a computation which is irrelevant for the value it produces may still be relevant for the side effects it performs. One problem that can result is the "relevant synchronization" problem wherein one computation requires some side effect event (a "relevant synchronization") to be performed by another computation, which might be aborted, before the first computation can make progress. Another problem that can arise is the "preemptive delay" problem wherein a computation that will perform some awaited side effect event is preempted by a computation whose importance (e.g. priority) is less than that of computations waiting for the event. In this paper we show how the sponsor model developed for speculative computation in Multilisp can be extended to provide a novel solution to these two problems. The idea is for the computation awaiting some action, such as the production of a value or the release of a semaphore, to sponsor the computation or set of computations that will perform the awaited action. This sponsorship ensures that the awaited action executes, and executes with at least the waiter's level of importance. We show how to apply this technique to solve the above problems for several producer/consumer and semaphore applications. The idea extends naturally to other synchronization mechanisms.
Old news, one may say, but then why does the industry continue to ignore this knowledge (What really happened on Mars? -- Authoritative Account)? See also the author's earlier paper, which among other things gives an overview of speculative computation in general: Speculative computation in Multilisp (1990).
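
The sponsorship idea itself is simple enough to caricature in a few lines. The sketch below is my own Python approximation of the rule "the awaited action runs with at least the waiter's importance"; it is not Multilisp and not the machinery from the paper.

    # Caricature of the sponsor model: a task's effective importance is the
    # maximum of its own importance and that of its sponsors. All names are
    # illustrative; this is not the Multilisp implementation from the paper.

    class Task:
        def __init__(self, name, importance):
            self.name = name
            self.importance = importance
            self.sponsors = []          # tasks whose progress depends on us

        def effective_importance(self):
            return max([self.importance] +
                       [s.effective_importance() for s in self.sponsors])

    producer = Task("producer of the awaited side effect", importance=1)
    waiter = Task("consumer waiting on that effect", importance=9)

    # Before sponsorship: a scheduler driven by raw importance could starve
    # or even abort the producer, stalling the high-importance waiter
    # (the "preemptive delay" and "relevant synchronization" problems).
    print(producer.effective_importance())   # 1

    # The waiter sponsors the producer; the awaited action now runs with at
    # least the waiter's level of importance and cannot be discarded as
    # irrelevant while someone still depends on it.
    producer.sponsors.append(waiter)
    print(producer.effective_importance())   # 9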

A Dynamic Continuation-Passing Style for Dynamic Delimited Continuations

A Dynamic Continuation-Passing Style for Dynamic Delimited Continuations by Dariusz Biernacki, Olivier Danvy, Kevin Millikin. 2006.
Compared to static delimited continuations, and despite recent implementation advances, the topic of dynamic delimited continuations still remains largely unexplored. We believe that the spectrum of compatible computational artifacts presented here — abstract machine, evaluator, computational monad, and dynamic continuation-passing style — puts one in a better position to assess them.

Simon Peyton Jones: Beautiful concurrency

I am writing a chapter for a book called "Beautiful code", edited by Greg Wilson. My draft chapter is about Software Transactional Memory in Haskell.

I would welcome any comments or questions you have on the paper, or constructive suggestions for improving it; the more concrete the better.


The book is aimed at a general audience of programmers, not Haskell geeks, so I have tried to explain everything necessary as I go along. So if you are not a Haskell expert, your input would be particularly valuable to me.

You can post your comments on the Haskell wiki.

STM was discussed here many times before, of course. For me the original papers were easier to follow, but some may prefer the style of presentation used here.

Charming Python: Decorators make magic easy

Python made metaprogramming possible, but each Python version has added slightly different -- and not quite compatible -- wrinkles to the way you accomplish metaprogramming tricks. Playing with first-class function objects has long been around, as have techniques for peeking and poking at magic attributes. With version 2.2, Python grew a custom metaclass mechanism that went a long way, but at the cost of melting users' brains. More recently, with version 2.4, Python has grown "decorators," which are the newest -- and by far the most user-friendly -- way to perform most metaprogramming.

While metaprogramming is inherently a bit confusing, I think this article could have been a little clearer. Still, it's a nice high-level introduction to decorators.
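
For anyone who hasn't seen one, a decorator is just a callable applied to a function at definition time, with the @ syntax as sugar for an explicit rebinding. A minimal example (mine, not the article's):

    import functools

    def traced(func):
        """Return a wrapper that logs each call to func."""
        @functools.wraps(func)           # preserve name, docstring, etc.
        def wrapper(*args, **kwargs):
            print(f"calling {func.__name__}{args}")
            return func(*args, **kwargs)
        return wrapper

    @traced                              # same as: add = traced(add)
    def add(a, b):
        return a + b

    print(add(2, 3))
    # calling add(2, 3)
    # 5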

Matching Objects With Patterns

Matching Objects With Patterns. Burak Emir, Martin Odersky, and John Williams.

Data in object-oriented programming is organized in a hierarchy of classes. The problem of object-oriented pattern matching is how to explore this hierarchy from the outside. This usually involves classifying objects by their run-time type, accessing their members, or determining some other characteristic of a group of objects. In this paper we compare six different pattern matching techniques: object-oriented decomposition, visitors, type-tests/typecasts, typecase, case classes, and extractors. The techniques are compared on nine criteria related to conciseness, maintainability and performance. The paper introduces case classes and extractors as two new pattern-matching methods and shows that their combination works well for all of the established criteria.
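
To make "case classes and extractors" concrete for readers who don't write Scala, here is a loose analogue using Python 3.10+ structural pattern matching, where dataclasses supply the constructor-like patterns; the example is mine and only gestures at the Scala feature the paper introduces.

    # Loose analogue of case-class-style pattern matching, illustrating the
    # "explore a class hierarchy from the outside" problem the paper studies.
    # This is Python 3.10+ structural matching, not Scala; the example is mine.

    from dataclasses import dataclass

    class Expr:
        pass

    @dataclass
    class Num(Expr):          # roughly a "case class": constructor doubles as a pattern
        value: int

    @dataclass
    class Add(Expr):
        left: Expr
        right: Expr

    def simplify(e: Expr) -> Expr:
        match e:
            case Add(Num(0), right):      # deep pattern: 0 + x  ->  x
                return simplify(right)
            case Add(left, right):
                return Add(simplify(left), simplify(right))
            case _:
                return e

    print(simplify(Add(Num(0), Add(Num(0), Num(7)))))   # Num(value=7)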